La veille de la cybersécurité
The transatlantic AI divide
Washington and Brussels are both preparing for a future dominated by artificial intelligence, but first they need to get out of each other's way. Tech regulators on both sides of the Atlantic hope to prevent a split on AI rules like the one seen on data privacy, where regulators in Europe got out ahead of their U.S. counterparts and sparked all kinds of havoc that continue to threaten transatlantic data flows. "There is a lot of interest to avoid having segmented approaches," said Elham Tabassi, chief of staff in the Information Technology Laboratory at the National Institute of Standards and Technology. But regulators in the EU and U.S. are already taking different approaches to the multi-trillion-dollar transatlantic tech economy. The EU is plowing ahead with mandatory AI rules meant to safeguard privacy and civil rights, while the U.S. focuses on voluntary guidelines. And there is another fundamental divide: the U.S. wants to promote ethical research and use of the technology, while Europe focuses on potentially banning, restricting or auditing specific lines of code.
It would be impossible to pull the plug on AI that wanted to harm humans, scientists warn
The idea of an artificial intelligence (AI) uprising may sound like the plot of a science-fiction film, but the notion is the subject of a new study, which finds that it is possible and that we would not be able to stop it. A team of international scientists designed a theoretical containment algorithm that would ensure a super-intelligent system could not harm people under any circumstances, by simulating the AI and blocking it from wreaking havoc on humanity. The analysis shows, however, that no current algorithm can reliably halt such an AI: deciding whether an arbitrary program will ever take a harmful action is a variant of the halting problem, so commanding the system not to destroy the world could leave the containment algorithm itself stuck, never finishing its own analysis. Iyad Rahwan, Director of the Center for Humans and Machines, said: 'If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI.' 'In effect, this makes the containment algorithm unusable.' AI has fascinated humans for years, as we are in awe of machines that control cars, compose symphonies or beat the world's best chess player at their own game.
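The impossibility result rests on the same self-reference that Turing used to prove the halting problem undecidable. Below is a minimal Python sketch of that argument; the names `would_harm`, `do_harm` and `contrarian` are illustrative assumptions, not code from the study.

```python
# A minimal sketch, assuming a hypothetical containment oracle existed.

def would_harm(program, data):
    """Hypothetical total decider: returns True iff program(data) would
    harm humans. The argument below shows no always-correct,
    always-halting version of this function can exist, so the body is
    intentionally left unimplemented."""
    raise NotImplementedError

def do_harm():
    """Stand-in for any action the containment algorithm must prevent."""
    ...

def contrarian(data):
    """Adversarial program: asks the oracle about itself, then does the
    opposite of whatever the oracle predicts."""
    if would_harm(contrarian, data):
        return        # predicted harmful, so behave safely
    do_harm()         # predicted safe, so act harmfully
```

Whatever `would_harm(contrarian, data)` returns is wrong: if it answers True, `contrarian` behaves safely; if it answers False, `contrarian` does harm. This is why the study concludes that a perfect containment algorithm cannot exist in general.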
This Robot Intentionally Hurts People--And Makes Them Bleed
Asimov's First Law of Robotics is very clear: robots may not harm people. Although there are certainly plenty of large robots, often used in manufacturing, that one would have to consider dangerous, roboticists have generally hewed to that rule. The "law," penned by science-fiction giant Isaac Asimov in his 1942 short story Runaround, was one of three; the second reads, "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law." To be sure, accidents involving robots happen, as when someone gets too close to an industrial robot. But now a Berkeley, California man wants to start a robust conversation among ethicists, philosophers, lawyers, and others about where technology is going, and what dangers robots will present to humanity in the future.